A new Levenberg-Marquardt approach based on conjugate gradient structure for solving absolute value equations
Authors
Abstract:
In this paper, we present a new approach for solving the absolute value equation (AVE) that uses the Levenberg-Marquardt method with a conjugate subgradient structure. In conjugate subgradient methods, the new direction is obtained by combining the steepest descent direction with the previous direction, which may not lead to good numerical results. Therefore, we replace the steepest descent direction with the Levenberg-Marquardt direction. The descent property of the direction generated by the new algorithm at each iteration is established, and the global convergence of the method is proved under some mild assumptions. Some numerical results are reported.
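To make the idea concrete, the following is a minimal Python sketch under common assumptions that the abstract itself does not spell out: the AVE is taken in the form Ax - |x| = b with residual F(x) = Ax - |x| - b, a generalized Jacobian J(x) = A - diag(sign(x)) is used, the Levenberg-Marquardt direction solves (J^T J + mu I) d = -J^T F, and that direction is combined with the previous one in a conjugate-gradient-style recurrence. The fixed mu and beta, the descent fallback, and the Armijo backtracking are illustrative placeholders, not the paper's actual rules.

import numpy as np

def solve_ave_lm_cg(A, b, x0=None, mu=1e-2, beta=0.3, tol=1e-8, max_iter=500):
    """Illustrative sketch only: solve Ax - |x| = b by combining a
    Levenberg-Marquardt direction with the previous direction in a
    conjugate-gradient-style recurrence. mu, beta and the Armijo line
    search are placeholder choices, not the paper's actual rules."""
    n = A.shape[0]
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    d_prev = np.zeros(n)
    for _ in range(max_iter):
        F = A @ x - np.abs(x) - b                    # AVE residual F(x)
        if np.linalg.norm(F) <= tol:
            break
        J = A - np.diag(np.sign(x))                  # a generalized Jacobian of F
        g = J.T @ F                                  # gradient of 0.5*||F||^2
        d_lm = np.linalg.solve(J.T @ J + mu * np.eye(n), -g)   # LM direction
        d = d_lm + beta * d_prev                     # combine with previous direction
        if g @ d >= 0:                               # keep a descent direction
            d = d_lm
        t, f0 = 1.0, 0.5 * (F @ F)                   # simple Armijo backtracking
        while t > 1e-12:
            Ft = A @ (x + t * d) - np.abs(x + t * d) - b
            if 0.5 * (Ft @ Ft) <= f0 + 1e-4 * t * (g @ d):
                break
            t *= 0.5
        x = x + t * d
        d_prev = d
    return x

As a quick sanity check, one can build a matrix A whose singular values all exceed 1 (which guarantees a unique AVE solution for any right-hand side), pick a target x*, set b = A x* - |x*|, and verify that the returned iterate approaches x*.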
Similar resources
A Three-terms Conjugate Gradient Algorithm for Solving Large-Scale Systems of Nonlinear Equations
The nonlinear conjugate gradient method is well known for solving large-scale unconstrained optimization problems due to its low storage requirement and ease of implementation. Research activities on its application to higher-dimensional systems of nonlinear equations are just beginning. This paper presents a three-term conjugate gradient algorithm for solving large-scale systems of nonlinear equations...
A New Cuckoo Search Based Levenberg-Marquardt (CSLM) Algorithm
The back-propagation neural network (BPNN) algorithm is a widely used technique for training artificial neural networks. It is also a very popular optimization procedure applied to find optimal weights in a training process. However, traditional back propagation optimized with the Levenberg-Marquardt training algorithm has some drawbacks, such as getting stuck in local minima and network stagnancy. This...
On Levenberg-Marquardt-Kaczmarz Iterative Methods for Solving Systems of Nonlinear Ill-Posed Equations
In this article, a modified Levenberg-Marquardt method coupled with a Kaczmarz strategy for obtaining stable solutions of nonlinear systems of ill-posed operator equations is investigated. We show that the proposed method is a convergent regularization method. Numerical tests are presented for a nonlinear inverse doping problem based on a bipolar model.
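For orientation, a schematic Python sketch of a Levenberg-Marquardt-Kaczmarz sweep over a block system F_i(x) = y_i, i = 0, ..., N-1, is given below: the blocks are visited cyclically and one regularized LM step is taken per block. The fixed regularization parameter alpha, the plain cyclic control, and the sweep count are simplifying assumptions; the method analyzed in the article also involves noise-dependent stopping and skipping rules that are omitted here.

import numpy as np

def lm_kaczmarz(F_list, J_list, y_list, x0, alpha=1e-2, sweeps=50):
    """Schematic Levenberg-Marquardt-Kaczmarz iteration: cycle over the
    blocks F_i(x) = y_i and take one regularized LM step per block.
    alpha and the fixed number of sweeps are illustrative placeholders."""
    x = np.array(x0, dtype=float)
    n, N = x.size, len(F_list)
    for k in range(sweeps * N):
        i = k % N                        # cyclic choice of the active block
        r = F_list[i](x) - y_list[i]     # residual of block i
        J = J_list[i](x)                 # Jacobian of block i at x
        x = x + np.linalg.solve(J.T @ J + alpha * np.eye(n), -J.T @ r)
    return x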
A Parameter-Self-Adjusting Levenberg-Marquardt Method for Solving Nonsmooth Equations
A parameter-self-adjusting Levenberg-Marquardt method (PSA-LMM) is proposed for solving a nonlinear system of equations F(x) = 0, where F : R^n -> R^n is a semismooth mapping. At each iteration, the LM parameter mu_k is automatically adjusted based on the ratio between the actual reduction and the predicted reduction. The global convergence of PSA-LMM for solving semismooth equations is demonstrated. Under the...
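The ratio-based adjustment of mu_k described above is in the spirit of a trust-region gain-ratio test. A hedged Python sketch follows; the thresholds and scaling factors are illustrative choices, not the rules used by PSA-LMM.

def adjust_lm_parameter(mu, actual_red, pred_red,
                        eta1=0.25, eta2=0.75, shrink=0.5, grow=4.0):
    """Gain-ratio update of the LM parameter; the thresholds and factors
    here are illustrative, not the PSA-LMM rules."""
    ratio = actual_red / pred_red if pred_red > 0 else 0.0
    if ratio < eta1:      # poor agreement with the model: enlarge mu
        return mu * grow
    if ratio > eta2:      # good agreement: shrink mu toward Gauss-Newton
        return mu * shrink
    return mu             # otherwise leave mu unchanged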
A New Hybrid Conjugate Gradient Method Based on Eigenvalue Analysis for Unconstrained Optimization Problems
In this paper, two extended three-term conjugate gradient methods based on the Liu-Storey (LS) conjugate gradient method are presented to solve unconstrained optimization problems. A remarkable property of the proposed methods is that the search direction always satisfies the sufficient descent condition independently of the line search method, based on eigenvalue analysis. The global...
Two CSCS-Based Iteration Methods for Solving Absolute Value Equations
Recently, two families of HSS-based iteration methods have been constructed for solving the system of absolute value equations (AVEs), which is a class of non-differentiable NP-hard problems. In this study, we establish the Picard-CSCS iteration method and the nonlinear CSCS-like iteration method for AVEs involving a Toeplitz matrix. Then, we analyze the convergence of the Picard-CSCS iteration method...
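For reference, the outer Picard iteration for the AVE Ax - |x| = b reads x_{k+1} = A^{-1}(|x_k| + b); in the Picard-CSCS method each linear solve with the Toeplitz matrix A is carried out by a CSCS splitting iteration. The Python sketch below keeps only the outer Picard loop and substitutes a direct dense solve for the CSCS inner solver, which is a deliberate simplification.

import numpy as np

def picard_ave(A, b, x0=None, tol=1e-10, max_iter=500):
    """Outer Picard iteration x_{k+1} = A^{-1}(|x_k| + b) for Ax - |x| = b.
    The CSCS inner solver for Toeplitz A is replaced by a direct dense
    solve, purely for illustration."""
    n = A.shape[0]
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    for _ in range(max_iter):
        x_new = np.linalg.solve(A, np.abs(x) + b)    # one Picard update
        if np.linalg.norm(x_new - x) <= tol * max(1.0, np.linalg.norm(x_new)):
            return x_new
        x = x_new
    return x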
Volume 5, Issue 21
Pages 5-14
Publication date: 2019-12-22